🛠️ All DevTools

Showing 481–500 of 4297 tools

Last Updated
April 24, 2026 at 12:00 AM

jarrodwatts/claude-hud

GitHub Trending

[Other] A Claude Code plugin that shows what's happening - context usage, active tools, running agents, and todo progress

Found: March 17, 2026 ID: 3805

Building a Shell

Hacker News (score: 110)

[Other] Building a Shell

Found: March 17, 2026 ID: 3807

[CLI Tool] Show HN: Pgit – A Git-like CLI backed by PostgreSQL

Found: March 17, 2026 ID: 3821

[Other] Flash-KMeans: Fast and Memory-Efficient Exact K-Means

Found: March 17, 2026 ID: 3842

[CLI Tool] Show HN: Crust – A CLI framework for TypeScript and Bun

We've been building Crust (https://crustjs.com/), a TypeScript-first, Bun-native CLI framework with zero dependencies. It's been powering our core product internally for a while, and we're now open-sourcing it.

The problem we kept running into: existing CLI frameworks in the JS ecosystem are either minimal arg parsers where you wire everything yourself, or heavyweight frameworks with large dependency trees and Node-era assumptions. We wanted something in between.

What Crust does differently:

- Full type inference from definitions: args and flags are inferred automatically. No manual type annotations, no generics to wrangle. You define a flag as type: "string" and it flows through to your handler.
- Compile-time validation: catches flag alias collisions and variadic arg mistakes before your code runs, not at runtime.
- Zero runtime dependencies: @crustjs/core is ~3.6 kB gzipped (21 kB install). For comparison: yargs is 509 kB, oclif is 411 kB.
- Composable modules: core, plugins, prompts, styling, validation, and build tooling are all separate packages. Install only what you need.
- Plugin system: middleware-based with lifecycle hooks (preRun/postRun). Official plugins for help, version, and shell autocompletion.
- Built for Bun: no Node compatibility layers, no legacy baggage.

Quick example:

```typescript
import { Crust } from "@crustjs/core";
import { helpPlugin, versionPlugin } from "@crustjs/plugins";

const main = new Crust("greet")
  .args([{ name: "name", type: "string", default: "world" }])
  .flags({ shout: { type: "boolean", short: "s" } })
  .use(helpPlugin())
  .use(versionPlugin("1.0.0"))
  .run(({ args, flags }) => {
    const msg = `Hello, ${args.name}!`;
    console.log(flags.shout ? msg.toUpperCase() : msg);
  });

await main.execute();
```

Scaffold a new project:

```shell
bun create crust my-cli
```

Site: https://crustjs.com
GitHub: https://github.com/chenxin-yan/crust

Happy to answer any questions about the design decisions or internals.

Found: March 17, 2026 ID: 3811

[Other] Video Encoding and Decoding with Vulkan Compute Shaders in FFmpeg

Found: March 17, 2026 ID: 3845

[Other] Leanstral: Open-source agent for trustworthy coding and formal proof engineering

Lean 4 paper (2021): https://dl.acm.org/doi/10.1007/978-3-030-79876-5_37

Found: March 16, 2026 ID: 3801

[Other] Show HN: Most GPU Upgrades Aren't Worth It, I Built a Calculator to Prove It

I run a small project called best-gpu.com, a site that ranks GPUs by price-to-performance.

While browsing PC building forums and Reddit, I kept seeing the same question: "What should I upgrade to from my current GPU?" Most answers are just lists of cards without showing the actual performance gain, so people often end up paying for upgrades that barely improve performance.

So I built a small tool: a GPU Upgrade Calculator.

You enter your current GPU and it shows:

- estimated performance gain
- a value score based on price vs. performance
- a filtered list of upgrade options (brand, price, VRAM, etc.)

The goal is simply to help people avoid spending money on upgrades that aren't really worth it.

Curious to hear feedback from HN on the approach, data sources, or features that would make something like this more useful.

https://best-gpu.com/upgrade.php
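The site does not publish its scoring formula; as a rough illustration of a "value score based on price vs. performance", one plausible gain-per-dollar sketch (benchmark numbers and prices below are made up, not real data) could look like:

```python
# Hypothetical value-score sketch: the real best-gpu.com formula is not
# published. Benchmark scores and prices are illustrative, not real data.

def upgrade_value(current_score: float, candidate_score: float,
                  candidate_price: float) -> dict:
    """Estimate relative performance gain and a simple gain-per-dollar score."""
    gain = (candidate_score - current_score) / current_score  # 0.3 == +30%
    value = gain / candidate_price * 1000  # performance gain per $1000 spent
    return {"gain_pct": round(gain * 100, 1), "value": round(value, 2)}

# Illustrative comparison from a current GPU scoring 100 benchmark units:
candidates = {"mid-tier": (130.0, 350.0), "high-end": (180.0, 900.0)}
for name, (score, price) in candidates.items():
    print(name, upgrade_value(100.0, score, price))
```

A score like this makes the author's point concrete: a pricier card with a bigger raw gain can still come out behind on gain per dollar.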

Found: March 16, 2026 ID: 3804

[API/SDK] Show HN: Open-source, extract any brand's logos, colors, and assets from a URL

Hi everyone, I just open sourced OpenBrand: extract any brand's logos, colors, and assets from just a URL.

It's MIT licensed, open source, completely free. Try it out at openbrand.sh

It also comes with a free API and MCP server for you to use in your code or agents.

Why we built this: while building another product, we needed to pull in customers' brand images as custom backgrounds. It felt like a simple enough problem with no open source solution, so we built one.
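The post doesn't describe the extraction pipeline; for the "colors" half, one classic approach is a dominant-color count over rendered pixels. A minimal sketch (the pixel data and function are invented for illustration, not OpenBrand's actual code):

```python
# Hypothetical sketch of dominant-color extraction: count pixel colors from a
# page screenshot and keep the most frequent ones. Not OpenBrand's pipeline.
from collections import Counter

def dominant_colors(pixels: list[tuple[int, int, int]], k: int = 2) -> list[str]:
    """Return the k most frequent RGB colors as hex strings."""
    counts = Counter(pixels)
    return ["#%02x%02x%02x" % color for color, _ in counts.most_common(k)]

# Made-up pixel sample: mostly brand orange, some white background, one black.
pixels = [(255, 87, 51)] * 5 + [(255, 255, 255)] * 3 + [(0, 0, 0)]
print(dominant_colors(pixels))  # ['#ff5733', '#ffffff']
```

Real tools usually add clustering and filter out background/grayscale pixels, but the frequency-count core is the same idea.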

Found: March 16, 2026 ID: 3800

[Other] Meta’s renewed commitment to jemalloc

https://github.com/jemalloc/jemalloc

Found: March 16, 2026 ID: 3798

[DevOps] Launch HN: Chamber (YC W26) – An AI Teammate for GPU Infrastructure

Hey HN, we're Jie Shen, Charles, Andreas, and Shaocheng. We built Chamber (https://usechamber.io), an AI agent that manages GPU infrastructure for you. You talk to it wherever your team already works and it handles things like provisioning clusters, diagnosing failed jobs, and managing workloads. Demo: https://www.youtube.com/watch?v=xdqh2C_hif4

We all worked on GPU infrastructure at Amazon. Between us we've spent years on this problem: monitoring GPU fleets, debugging failures at scale, building the tooling around it. After leaving we talked to a bunch of AI teams and kept hearing the same stuff. Platform engineers spend half their time just keeping things running: building dashboards, writing scheduling configs, answering "when will my job start?" all day. Researchers lose hours when a training run fails because figuring out why means digging through Kubernetes events, node logs, and GPU metrics in totally separate tools. Pretty much everyone had stitched together Prometheus, Grafana, Kubernetes scheduling policies, and a bunch of homegrown scripts, and they were spending as much time maintaining all of it as actually using it.

The thing we kept noticing is that most of this work follows patterns: triage the failure, correlate a few signals, figure out what to do about it. If you had a platform with structured access to the full state of a GPU environment, you could have an agent do that work for you.

So that's what we built. Chamber is a control plane that keeps a live model of your GPU fleet: nodes, workloads, team structure, cluster health. Every operation it supports is exposed as a tool the agent can call: inspecting node health, reading cluster topology, managing workload lifecycle, adjusting resource configs, provisioning infrastructure. These are structured operations with validation and rollback, not just raw shell commands. When we add new capabilities to the platform, they automatically become things the agent can do too.

We spent a lot of time on safety because we've seen what happens when infrastructure automation goes wrong. A wrong call can kill a multi-day training run or cascade across a cluster. So the agent has graduated autonomy. Routine stuff it handles on its own: diagnosing a failed job, resubmitting with corrected resources, cordoning a bad node. But anything that touches other teams' workloads or production jobs needs human approval first. Every action gets logged with what the agent saw, why it acted, and what it changed.

The platform underneath is really what makes the diagnosis work. When the agent investigates a failure, it queries GPU state, workload history, node health timelines, and cluster topology. That's the difference between "your job OOMed" and "your job OOMed because the batch size exceeded available VRAM on this node; here's a corrected config." Different root causes get different fixes.

One thing that surprised us, even coming from Amazon where we'd seen large GPU fleets: most teams we talk to can't even tell you how many GPUs are in use right now. The monitoring just doesn't exist. They're flying blind on their most expensive hardware.

We've launched with a few early customers and are onboarding new teams. We're still refining pricing and are currently evaluating models like per-GPU-under-management and tiered plans. We plan to publish transparent pricing once we've validated what works best for customers. In the meantime, we know "contact us" isn't ideal.

Would love to hear from anyone running GPU clusters. What's the most tedious part of your setup? What would you actually trust an agent to do? What's off limits? Looking forward to feedback!
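Chamber's internals aren't public; as a generic illustration of the graduated-autonomy pattern the post describes, an agent's tool dispatcher might gate risky operations behind human approval. All tool names and risk tiers below are hypothetical, not Chamber's actual API:

```python
# Hypothetical sketch of graduated autonomy for agent tool calls. Tool names,
# risk tiers, and the approval flag are invented; this is not Chamber's API.

AUTONOMOUS = {"diagnose_job", "resubmit_job", "cordon_node"}   # routine, self-serve
NEEDS_APPROVAL = {"kill_workload", "resize_cluster"}           # touches shared/prod state

def dispatch(tool: str, args: dict, approved: bool = False) -> str:
    """Run a tool call; escalate high-risk calls to a human before executing."""
    if tool not in AUTONOMOUS | NEEDS_APPROVAL:
        raise ValueError(f"unknown tool: {tool}")
    if tool in NEEDS_APPROVAL and not approved:
        return f"PENDING_APPROVAL: {tool}({args})"
    # ... perform the structured operation (with validation/rollback), then
    # append what the agent saw, why it acted, and what changed to an audit log.
    return f"EXECUTED: {tool}({args})"

print(dispatch("cordon_node", {"node": "gpu-17"}))               # runs on its own
print(dispatch("kill_workload", {"id": "train-42"}))             # waits for a human
print(dispatch("kill_workload", {"id": "train-42"}, approved=True))
```

The interesting design question, which the authors raise themselves, is where the line between the two tiers sits for any given team.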

Found: March 16, 2026 ID: 3794

[Other] Show HN: Claude Code skills that build complete Godot games

I've been working on this for about a year through four major rewrites. Godogen is a pipeline that takes a text prompt, designs the architecture, generates 2D/3D assets, writes the GDScript, and tests it visually. The output is a complete, playable Godot 4 project.

Getting LLMs to reliably generate functional games required solving three specific engineering bottlenecks:

1. Training data scarcity: LLMs barely know GDScript. It has ~850 classes and a Python-like syntax that will happily let a model hallucinate Python idioms that fail to compile. To fix this, I built a custom reference system: a hand-written language spec, full API docs converted from Godot's XML source, and a quirks database for engine behaviors you can't learn from docs alone. Because 850 classes blow up the context window, the agent lazy-loads only the specific APIs it needs at runtime.

2. Build-time vs. runtime state: Scenes are generated by headless scripts that build the node graph in memory and serialize it to .tscn files. This avoids the fragility of hand-editing Godot's serialization format. But it means certain engine features (like `@onready` or signal connections) aren't available at build time; they only exist when the game actually runs. Teaching the model which APIs are available at which phase, and that every node needs its owner set correctly or it silently vanishes on save, took careful prompting but paid off.

3. The evaluation loop: A coding agent is inherently biased toward its own output. To stop it from cheating, a separate Gemini Flash agent acts as visual QA. It sees only the rendered screenshots from the running engine (no code) and compares them against a generated reference image. It catches the visual bugs text analysis misses: z-fighting, floating objects, physics explosions, and grid-like placements that should be organic.

Architecturally, it runs as two Claude Code skills: an orchestrator that plans the pipeline, and a task executor that implements each piece in a `context: fork` window so mistakes and state don't accumulate.

Everything is open source: https://github.com/htdt/godogen

Demo video (real games, not cherry-picked screenshots): https://youtu.be/eUz19GROIpY

Blog post with the full story (all the wrong turns) coming soon. Happy to answer questions.
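The lazy-loading idea in bottleneck 1 can be sketched generically: instead of stuffing all ~850 class references into the prompt, load only the entries the current task mentions. The doc store and matching rule below are invented for illustration, not Godogen's actual format:

```python
# Hypothetical sketch of lazy-loading API reference docs to keep the context
# window small: pull in only the classes the current task names. The doc
# snippets below are stand-ins, not Godot's real reference text.

API_DOCS = {  # stand-in for ~850 Godot classes converted from XML docs
    "Node2D": "Node2D: 2D object with position, rotation, and scale...",
    "Sprite2D": "Sprite2D: displays a 2D texture...",
    "Area2D": "Area2D: detects overlaps and emits signals...",
}

def load_context(task_text: str) -> list[str]:
    """Return only the reference entries for classes the task mentions."""
    return [doc for cls, doc in API_DOCS.items() if cls in task_text]

docs = load_context("Add a Sprite2D child under the player's Node2D")
print(len(docs))  # 2 of 3 entries loaded; Area2D stays out of the context
```

A production version would presumably match class names more robustly and follow inheritance chains, but the budget-saving principle is the same.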

Found: March 16, 2026 ID: 3796

[CLI Tool] Apideck CLI – An AI-agent interface with much lower context consumption than MCP

Found: March 16, 2026 ID: 3793

Gluon: Explicit Performance

Hacker News (score: 17)

[Other] Gluon: Explicit Performance

Found: March 16, 2026 ID: 3837

YishenTu/claudian

GitHub Trending

[Other] An Obsidian plugin that embeds Claude Code as an AI collaborator in your vault

Found: March 16, 2026 ID: 3791

[Other] An agent harness built with LangChain and LangGraph. It comes with a planning tool, a filesystem backend, and the ability to spawn subagents, making it well suited to complex agentic tasks.

Found: March 16, 2026 ID: 3790

volcengine/OpenViking

GitHub Trending

[Database] OpenViking is an open-source context database designed specifically for AI agents (such as openclaw). It unifies the management of the context that agents need (memory, resources, and skills) through a filesystem paradigm, enabling hierarchical context delivery and self-evolution.

Found: March 16, 2026 ID: 3789

thedotmack/claude-mem

GitHub Trending

[Other] A Claude Code plugin that automatically captures everything Claude does during your coding sessions, compresses it with AI (using Claude's agent-sdk), and injects relevant context back into future sessions.

Found: March 16, 2026 ID: 3788

[CLI Tool] Lazycut: A simple terminal video trimmer using FFmpeg

Found: March 16, 2026 ID: 3795

[Other] Toward automated verification of unreviewed AI-generated code

Found: March 16, 2026 ID: 3809